13 research outputs found

    Single-Scale Fusion: An Effective Approach to Merging Images

    No full text
    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
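    As a concrete baseline for the MSF process that SSF approximates, Laplacian-pyramid fusion can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's implementation: it uses 2x2 average pooling in place of Gaussian filtering, assumes grayscale inputs whose dimensions are divisible by 2**(levels-1), and all function names are ours.

```python
import numpy as np

def down(img):
    # 2x2 average-pool downsampling (crude stand-in for Gaussian blur + decimate)
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up(img, shape):
    # Nearest-neighbour upsampling back to `shape`
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def multiscale_fusion(inputs, weights, levels=3):
    # Normalize the weight maps so they sum to one at every pixel
    total = np.sum(weights, axis=0) + 1e-12
    weights = [w / total for w in weights]
    # Laplacian pyramid per input, Gaussian (low-pass) pyramid per weight map
    pyramids = []
    for img, w in zip(inputs, weights):
        lap, gauss = [], []
        cur_i, cur_w = img, w
        for _ in range(levels - 1):
            next_i, next_w = down(cur_i), down(cur_w)
            lap.append(cur_i - up(next_i, cur_i.shape))  # band-pass detail
            gauss.append(cur_w)
            cur_i, cur_w = next_i, next_w
        lap.append(cur_i)      # low-pass residual at the coarsest level
        gauss.append(cur_w)
        pyramids.append((lap, gauss))
    # Blend each pyramid level across the inputs, then collapse coarse-to-fine
    blended = [sum(l[k] * g[k] for l, g in pyramids) for k in range(levels)]
    out = blended[-1]
    for k in range(levels - 2, -1, -1):
        out = blended[k] + up(out, blended[k].shape)
    return out
```

    The number of levels (and hence of intermediate buffers) grows with image size, which is exactly the cost the SSF reformulation removes.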

    Decolorization by Fusion

    No full text
    Decolorization aims at converting color images into grayscale images while preserving their original contrast and color discriminability. In this paper, we introduce an original fusion-based decolorization approach. Our algorithm employs as inputs the three color channels R, G, and B, and an additional input related to the Helmholtz-Kohlrausch effect. To blend those inputs, we adopt a multiscale fusion strategy to prevent the artifacts arising from a pixel-wise application of weight maps, and use several weight maps that respectively control saliency, exposure, and saturation. The new operator has been tested successfully on a large data set of both natural and synthetic images. It is competitive compared with modern optimization-based methods in terms of decolorized image visual quality, while offering the advantage of being computationally simple, and temporally consistent when decolorizing video sequences.
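    To illustrate weight-map-guided channel blending, here is a deliberately naive pixel-wise sketch that uses only an exposure-style weight (a Gaussian centred at mid-grey). The paper combines saliency, exposure, and saturation maps and blends them in a multiscale fashion precisely to avoid the artifacts this pixel-wise version can produce; the function name and the sigma value are our assumptions.

```python
import numpy as np

def decolorize_naive(rgb):
    # Pixel-wise fusion of the R, G, B channels, each weighted by how
    # "well exposed" it is (Gaussian weight centred at mid-grey 0.5).
    # rgb: float array in [0, 1], shape (H, W, 3); returns (H, W).
    channels = [rgb[..., c] for c in range(3)]
    sigma = 0.25  # assumed spread of the exposure weight
    weights = [np.exp(-((ch - 0.5) ** 2) / (2 * sigma ** 2)) for ch in channels]
    total = sum(weights) + 1e-12  # per-pixel normalization
    return sum(w * ch for w, ch in zip(weights, channels)) / total
```

    Applying such weights pixel-wise tends to create halos at sharp weight transitions, which is why the paper moves the blend into a multiscale (pyramid) scheme.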

    Color Channel Compensation (3C): A Fundamental Pre-Processing Step for Image Enhancement

    No full text
    This article introduces a novel solution to improve image enhancement in terms of color appearance. Our approach, called Color Channel Compensation (3C), overcomes artifacts resulting from the severely non-uniform color spectrum distribution encountered in images captured under hazy night-time conditions, underwater, or under non-uniform artificial illumination. Our solution is founded on the observation that, under such adverse conditions, the information contained in at least one color channel is almost completely lost, making the traditional enhancing techniques subject to noise and color shifting. In those cases, our pre-processing method proposes to reconstruct the lost channel based on the opponent color channel. Our algorithm subtracts a local mean from each opponent color pixel. Thereby, it partly recovers the lost color from the two colors (red-green or blue-yellow) involved in the opponent color channel. The proposed approach, whilst simple, is shown to consistently improve the outcome of conventional restoration methods. To prove the utility of our 3C operator, we provide an extensive qualitative and quantitative evaluation for white balancing, image dehazing, and underwater enhancement applications.
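    A rough sketch of this kind of opponent-channel compensation for a lost red channel follows. It uses global channel means for brevity (the paper works with local means), mirrors the red-channel compensation formula from the authors' related underwater work, and `alpha` is an assumed strength parameter, not from the paper.

```python
import numpy as np

def compensate_red(rgb, alpha=1.0):
    # Partly rebuild an attenuated red channel from its (opponent) green
    # channel, scaled by the gap between the channel means: pixels that are
    # bright in green and dark in red receive the largest correction.
    # rgb: float array in [0, 1], shape (H, W, 3).
    r, g = rgb[..., 0], rgb[..., 1]
    r_comp = r + alpha * (g.mean() - r.mean()) * g * (1.0 - r)
    out = rgb.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out
```

    The same idea applies to the blue-yellow opponent pair when the blue channel is the degraded one.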

    Color Balance and Fusion for Underwater Image Enhancement

    No full text
    We introduce an effective technique to enhance the images captured underwater and degraded due to the medium scattering and absorption. Our method is a single image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. It builds on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image. The two images to be fused, as well as their associated weight maps, are defined to promote the transfer of edges and color contrast to the output image. To prevent the sharp weight map transitions from creating artifacts in the low frequency components of the reconstructed image, we also adapt a multiscale fusion strategy. Our extensive qualitative and quantitative evaluation reveals that our enhanced images and videos are characterized by better exposedness of the dark regions, improved global contrast, and edge sharpness. Our validation also proves that our algorithm is reasonably independent of the camera settings, and improves the accuracy of several image processing applications, such as image segmentation and keypoint matching.
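    The white-balancing step in such a pipeline could, for instance, be a simple gray-world correction, one common choice; the paper's actual white balance differs, and the function name is ours.

```python
import numpy as np

def gray_world_wb(rgb):
    # Gray-world white balance: rescale each channel so its mean matches
    # the global mean, assuming the scene averages out to neutral grey.
    # rgb: float array in [0, 1], shape (H, W, 3).
    means = rgb.reshape(-1, 3).mean(axis=0)          # per-channel means
    gains = means.mean() / (means + 1e-12)           # per-channel gains
    return np.clip(rgb * gains, 0.0, 1.0)
```

    The white-balanced (and color-compensated) image then yields the two fusion inputs and their weight maps, which are blended multiscale as in the abstract.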

    Day and Night-Time Dehazing by Local Airlight Estimation

    No full text
    We introduce an effective fusion-based technique to enhance both day-time and night-time hazy scenes. When inverting the Koschmieder light transmission model, and by contrast with the common implementation of the popular dark-channel [1], we estimate the airlight on image patches and not on the entire image. Local airlight estimation is adopted because, under night-time conditions, the lighting generally arises from multiple localized artificial sources, and is thus intrinsically non-uniform. Selecting the sizes of the patches is, however, non-trivial. Small patches are desirable to achieve fine spatial adaptation to the atmospheric light, but large patches help improve the airlight estimation accuracy by increasing the possibility of capturing pixels with airlight appearance (due to severe haze). For this reason, multiple patch sizes are considered to generate several images that are then merged together. The discrete Laplacian of the original image is provided as an additional input to the fusion process to reduce the glowing effect and to emphasize the finest image details. Similarly, for day-time scenes we apply the same principle but use a larger patch size. For each input, a set of weight maps is derived so as to assign higher weights to regions of high contrast, high saliency and small saturation. Finally, the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. Extensive experimental results demonstrate the effectiveness of our approach as compared with recent techniques, both in terms of computational efficiency and the quality of the outputs.
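    The local-airlight inversion of the Koschmieder model I = J·t + A·(1−t) can be sketched as follows. This is a simplified illustration under our own assumptions: the per-patch airlight is taken as the patch maximum, the transmission is a dark-channel-style estimate, and `omega` and `t_min` are the usual haze-retention and transmission-floor parameters of that family of methods.

```python
import numpy as np

def local_airlight(img, patch):
    # Estimate the airlight A per patch (here: the per-channel maximum
    # inside each patch) instead of a single global value.
    h, w = img.shape[:2]
    A = np.zeros_like(img)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = img[y:y + patch, x:x + patch]
            A[y:y + patch, x:x + patch] = block.max(axis=(0, 1))
    return A

def dehaze(img, patch=8, omega=0.95, t_min=0.1):
    # Invert I = J*t + A*(1-t): estimate t from a dark-channel-style
    # minimum over color channels, then recover the scene radiance J.
    # img: float array in [0, 1], shape (H, W, 3).
    A = local_airlight(img, patch)
    t = 1.0 - omega * np.min(img / (A + 1e-12), axis=2)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

    Running `dehaze` with several patch sizes produces the multiple candidate images that the abstract's multi-scale fusion stage then merges.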